정보과학회논문지 (Journal of KIISE)
한글제목(Korean Title) |
트랜스포머 기반 한국어 텍스트 요약 모델의 순차적 문맥 학습 영향성 분석 |
¿µ¹®Á¦¸ñ(English Title) |
Analyzing the Impact of Sequential Context Learning on the Transformer Based Korean Text Summarization Model |
저자(Author) |
김수빈
김용준
방준성
Subin Kim
Yongjun Kim
Junseong Bang
|
원문수록처(Citation) |
Vol. 48, No. 10, pp. 1097-1104 (Oct. 2021) |
한글내용 (Korean Abstract) |
(Translated) Text summarization reduces the length of a text while preserving the meaning of the entire content, alleviating information overload and helping readers consume information quickly. To this end, research on Transformer-based English text summarization models has been actively conducted. Recently, an abstractive text summarization model was proposed that adds an RNN-based encoder to reflect the fixed word order of English. This paper studies how sequential context learning, implemented with an RNN-based encoder, affects an abstractive text summarization model for Korean, whose word order is freer than that of English. On directly collected Korean news articles, we trained a Transformer-based model and a model that adds an RNN-based encoder to the existing Transformer, and analyzed their performance on headline generation and article body summarization. Experimental results show that the model with the added RNN-based encoder performed better, confirming that sequential context learning is needed for abstractive summarization of Korean text. |
영문내용 (English Abstract) |
Text summarization reduces the sequence length while maintaining the meaning of the entire article body, mitigating information overload and helping readers consume information quickly. To this end, research on Transformer-based English text summarization models has been actively conducted. Recently, an abstractive text summarization model that reflects the fixed word order of English by adding a Recurrent Neural Network (RNN)-based encoder was proposed. In this paper, we study the effect of sequential context learning on an abstractive text summarization model by using an RNN-based encoder for Korean, which has a freer word order than English. A Transformer-based model and a model that adds an RNN-based encoder to the existing Transformer are trained on directly collected Korean articles to compare their performance on headline generation and article body summarization. Experiments show that the model performs better when the RNN-based encoder is added, and that sequential context learning is required for Korean abstractive text summarization. |
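The abstract's central claim, that self-attention alone lacks sequential context while an RNN-based encoder captures it, can be illustrated with a toy sketch. This is purely illustrative (it is not the authors' model and uses made-up random embeddings): mean-pooled self-attention without positional encoding cannot tell two word orders apart, whereas a simple RNN produces order-dependent representations.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # toy embedding dimension
x = rng.normal(size=(5, d))             # embeddings of a 5-token "sentence"
perm = x[[2, 0, 4, 1, 3]]               # the same tokens, reordered

def self_attention(tokens):
    """Single-head self-attention with no positional encoding."""
    scores = tokens @ tokens.T / np.sqrt(d)
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ tokens

def rnn_final_state(tokens, W, U):
    """Elman RNN: h_t = tanh(W x_t + U h_{t-1}); returns the last state."""
    h = np.zeros(d)
    for t in tokens:
        h = np.tanh(W @ t + U @ h)
    return h

W, U = rng.normal(size=(d, d)), rng.normal(size=(d, d))

# Permuting the input only permutes the attention output rows, so the
# mean-pooled representation is identical for both word orders.
pooled = self_attention(x).mean(axis=0)
pooled_perm = self_attention(perm).mean(axis=0)

# The RNN's final hidden state depends on the order tokens arrive in.
h_orig = rnn_final_state(x, W, U)
h_perm = rnn_final_state(perm, W, U)

print("attention pooling order-invariant:", np.allclose(pooled, pooled_perm))  # True
print("RNN state order-invariant:", np.allclose(h_orig, h_perm))               # False
```

This is why Transformers add positional encodings; the paper's question is whether, for a free-word-order language like Korean, the additional sequential modeling of an RNN encoder still helps beyond positional encodings alone.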
키워드(Keyword) |
한국어 텍스트 요약
제목 생성
어텐션 메커니즘
트랜스포머 모델
Korean text summarization
headline generation
attention mechanism
transformer model
|